We present a dataset containing object annotations with unique object identities (IDs) for the High Efficiency Video Coding (HEVC) v1 Common Test Conditions (CTC) sequences. Ground-truth annotations for 13 sequences were prepared and released as a dataset called SFU-HW-Tracks-V1. For each video frame, the ground-truth annotations include the object class ID, the object ID, and the bounding box location and its dimensions. The dataset can be used to evaluate object tracking performance on uncompressed video sequences and to study the relationship between video compression and object tracking.
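As a rough illustration of how such per-frame annotations could be consumed, the sketch below defines a small record type and a parser. The whitespace-separated layout and the field order are assumptions for illustration only; the abstract does not specify the actual file format of SFU-HW-Tracks-V1.

```python
from dataclasses import dataclass

@dataclass
class TrackAnnotation:
    """One ground-truth record: class ID, persistent object ID, and box geometry."""
    class_id: int   # object class (e.g., person, car)
    object_id: int  # unique identity, stable across frames of a sequence
    x: float        # bounding-box position
    y: float
    width: float    # bounding-box dimensions
    height: float

def parse_frame_annotations(lines):
    """Parse whitespace-separated annotation lines for a single frame (assumed layout)."""
    records = []
    for line in lines:
        class_id, object_id, x, y, w, h = line.split()
        records.append(TrackAnnotation(int(class_id), int(object_id),
                                       float(x), float(y), float(w), float(h)))
    return records

# Example usage with made-up values:
frame = parse_frame_annotations(["0 3 0.51 0.42 0.10 0.25"])
```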
When simulating soft robots, both their morphology and their controllers play important roles in task performance. This paper introduces a new method to co-evolve these two components in the same process. We do so by using the hyperNEAT algorithm to generate two separate neural networks in one pass, one responsible for the design of the robot body structure and the other for the control of the robot. The key difference between our method and most existing approaches is that it does not treat the development of the morphology and the controller as separate processes. Similar to nature, our method derives both the "brain" and the "body" of an agent from a single genome and develops them together. While our approach is more realistic and does not require an arbitrary separation of processes during evolution, it also makes the problem more complex, because the search space for this single genome becomes larger and any mutation to the genome affects the "brain" and the "body" at the same time. Additionally, we present a new speciation function that takes into consideration both the genotypic distance, as is standard for NEAT, and the similarity between robot bodies. With this function, agents with very different bodies are more likely to be placed in different species, which allows robots with different morphologies to have more specialized controllers, since they will not cross over with robots that are too different from them. We evaluate the presented methods on four tasks and observe that, even though the search space is larger, having a single genome makes the evolution process converge faster than having separate genomes for body and control. The agents in our population also show morphologies with a high degree of regularity and controllers capable of coordinating the voxels to produce the necessary movements.
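A minimal sketch of a combined speciation measure of the kind described above, assuming a NEAT-style genotypic distance is already available and that body similarity is measured as voxel-wise agreement between two morphology grids; the weighting scheme and the function names are illustrative, not the paper's exact formulation.

```python
import numpy as np

def body_similarity(body_a: np.ndarray, body_b: np.ndarray) -> float:
    """Fraction of voxels whose material type matches between two body grids (0..1)."""
    assert body_a.shape == body_b.shape
    return float(np.mean(body_a == body_b))

def speciation_distance(genotypic_distance: float,
                        body_a: np.ndarray,
                        body_b: np.ndarray,
                        w_geno: float = 1.0,
                        w_body: float = 1.0) -> float:
    """Combine NEAT genotypic distance with a morphology term.

    Dissimilar bodies increase the distance, so robots with very different
    morphologies tend to fall into different species and keep specialized
    controllers.
    """
    body_dissimilarity = 1.0 - body_similarity(body_a, body_b)
    return w_geno * genotypic_distance + w_body * body_dissimilarity

# Two agents would join the same species when this distance falls below a
# compatibility threshold, mirroring standard NEAT speciation.
```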
Taking background knowledge into account as context has always been an important part of solving tasks that involve natural language. One representative example of such tasks is text-based games, where players need to make decisions based both on description text previously shown in the game and on their own background knowledge about the language and common sense. In this work, we investigate not simply supplying common sense, as in prior research, but also how to use it effectively. We assume that the parts of the environment state that differ from common sense should constitute one of the grounds for action selection. We propose a novel agent, DiffG-RL, which constructs a Difference Graph that organizes the environment states and common sense by means of interactive objects, using a dedicated graph encoder. DiffG-RL also contains a framework for extracting the appropriate amount and representation of common sense from the source to support the construction of the graph. We validate DiffG-RL in experiments with text-based games that require common sense and show that it outperforms baselines by 17% in score. The code is available at https://github.com/ibm/diffg-rl
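To make the idea of a Difference Graph more concrete, here is a hedged sketch that, for each interactive object, keeps the relations where the observed environment state disagrees with common sense, since those differences are what the abstract argues should ground action selection. The triple format and the dictionary-based graph are assumptions for illustration; the learned graph encoder of DiffG-RL is not shown.

```python
from collections import defaultdict

def build_difference_graph(state_triples, commonsense_triples):
    """Organize environment state vs. common sense around interactive objects.

    Both inputs are iterables of (object, relation, value) triples.
    For each object we keep the relations where the observed state differs
    from (or is absent in) common sense, since those differences are the
    cues for action selection.
    """
    state = defaultdict(dict)
    common = defaultdict(dict)
    for obj, rel, val in state_triples:
        state[obj][rel] = val
    for obj, rel, val in commonsense_triples:
        common[obj][rel] = val

    graph = {}
    for obj in set(state) | set(common):
        diff = {rel: (state[obj].get(rel), common[obj].get(rel))
                for rel in set(state[obj]) | set(common[obj])
                if state[obj].get(rel) != common[obj].get(rel)}
        if diff:
            graph[obj] = diff  # edges: object -> differing (state, common-sense) pairs
    return graph

# e.g. the game says the apple is on the table, common sense says apples are in the fridge:
g = build_difference_graph([("apple", "location", "table")],
                           [("apple", "location", "fridge")])
```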
Our team, Hibikino-Musashi@Home (HMA for short), was founded in 2010 and is based in the Kitakyushu Science and Research Park, Japan. We have participated in the open platform league of the RoboCup@Home Japan Open competition every year since 2010. Moreover, we participated in RoboCup 2017 Nagoya as both an open platform league team and a domestic standard platform league team. Currently, the Hibikino-Musashi@Home team has 20 members from seven different laboratories based at the Kyushu Institute of Technology. In this paper, we introduce our team's activities and the technologies we use.
Smart monitoring using three-dimensional (3D) image sensors has been attracting attention in the context of smart cities. In smart monitoring, object detection from the point cloud data acquired by 3D image sensors is used to detect moving objects such as vehicles and pedestrians and to ensure safety on the road. However, the characteristics of point cloud data are diverse, depending on the light detection and ranging (LiDAR) unit used as the 3D image sensor and on the sensor's installation position. Although various deep learning (DL) models for object detection from point cloud data have been studied to date, no work has considered how to use multiple DL models in accordance with the characteristics of the point cloud data. In this work, we propose a feature-based model selection framework that creates various DL models by using multiple DL methods and by exploiting pseudo-incomplete training data generated by two artificial techniques: sampling and noise addition. It selects the most suitable DL model for the object detection task according to the characteristics of the point cloud data acquired in the real environment. To demonstrate the effectiveness of the proposed framework, we compare the performance of multiple DL models on benchmark datasets created from the KITTI dataset and compare example object detection results obtained through a real outdoor experiment. Depending on the situation, the detection accuracy differed by up to 32% between DL models, which confirms the importance of selecting an appropriate DL model for each situation.
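A minimal sketch of the two artificial degradation techniques named above, applied to an (N, 3) point cloud; the sampling ratio and noise scale are illustrative defaults, not values from the paper.

```python
import numpy as np

def random_sampling(points: np.ndarray, keep_ratio: float = 0.5) -> np.ndarray:
    """Simulate sparser LiDAR returns by randomly dropping points."""
    n_keep = max(1, int(len(points) * keep_ratio))
    idx = np.random.choice(len(points), size=n_keep, replace=False)
    return points[idx]

def add_noise(points: np.ndarray, sigma: float = 0.02) -> np.ndarray:
    """Simulate range/measurement error with Gaussian perturbation (in meters)."""
    return points + np.random.normal(scale=sigma, size=points.shape)

def make_pseudo_incomplete(points: np.ndarray) -> np.ndarray:
    """Generate pseudo-incomplete training data: sampling followed by noise addition."""
    return add_noise(random_sampling(points))

# points = np.fromfile("kitti_scan.bin", dtype=np.float32).reshape(-1, 4)[:, :3]
# degraded = make_pseudo_incomplete(points)
```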
This document describes the Portuguese-language podcast dataset released by Spotify for academic research purposes. We give an overview of how the data was sampled, some basic statistics over the collection, and brief information on the distribution of Brazilian and Portuguese dialects.
Deep neural networks (DNNs) are well known to be vulnerable to adversarial examples (AEs). In addition, AEs have adversarial transferability, which means that AEs generated for a source model can fool another black-box model (the target model) with non-trivial probability. In this paper, we investigate, for the first time, the property of adversarial transferability between models that include ConvMixer. To objectively verify this transferability property, the robustness of the models is evaluated with a benchmark attack method called AutoAttack. In image classification experiments, ConvMixer is confirmed to be vulnerable to adversarial transferability.
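A hedged sketch of how such transferability is commonly measured: adversarial examples are crafted against the source model (here with the public autoattack package, whose standard interface is assumed to be installed) and then fed to the target model; the drop in the target's accuracy on those examples indicates how well they transfer. Model and data loading are left as placeholders.

```python
import torch
from autoattack import AutoAttack  # https://github.com/fra31/auto-attack (assumed installed)

@torch.no_grad()
def accuracy(model, x, y):
    """Top-1 accuracy of a classifier on a batch."""
    return (model(x).argmax(dim=1) == y).float().mean().item()

def transfer_effect(source_model, target_model, x, y, eps=8 / 255):
    """Craft AEs on the source model and check how much they degrade the target.

    Returns the accuracy drop of the target model on the transferred AEs;
    a larger drop means higher adversarial transferability.
    """
    adversary = AutoAttack(source_model, norm='Linf', eps=eps, version='standard')
    x_adv = adversary.run_standard_evaluation(x, y, bs=128)
    return accuracy(target_model, x, y) - accuracy(target_model, x_adv, y)
```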
The polarization information of light in a scene is valuable for various image processing and computer vision tasks. A division-of-focal-plane polarimeter is a promising way to capture polarization images of different orientations in a single shot, but it requires color-polarization demosaicking. In this paper, we propose a two-step color-polarization demosaicking network (TCPDNet), which consists of two sub-tasks: color demosaicking and polarization demosaicking. We also introduce a reconstruction loss in the YCbCr color space to improve the performance of TCPDNet. Experimental comparisons demonstrate that TCPDNet outperforms existing methods in terms of the image quality of the polarization images and the accuracy of the Stokes parameters.
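A minimal sketch of a reconstruction loss evaluated in the YCbCr color space, as mentioned above; the BT.601 conversion matrix and the use of an L1 penalty are assumptions for illustration rather than the paper's exact loss.

```python
import torch
import torch.nn.functional as F

# BT.601 RGB -> YCbCr conversion matrix (inputs assumed in [0, 1]).
_RGB2YCBCR = torch.tensor([[ 0.299,  0.587,  0.114],
                           [-0.169, -0.331,  0.500],
                           [ 0.500, -0.419, -0.081]])
_OFFSET = torch.tensor([0.0, 0.5, 0.5]).view(1, 3, 1, 1)  # center Cb/Cr at 0.5

def rgb_to_ycbcr(img: torch.Tensor) -> torch.Tensor:
    """img: (B, 3, H, W) RGB in [0, 1] -> YCbCr."""
    return torch.einsum('ij,bjhw->bihw', _RGB2YCBCR.to(img), img) + _OFFSET.to(img)

def ycbcr_reconstruction_loss(pred_rgb: torch.Tensor, target_rgb: torch.Tensor) -> torch.Tensor:
    """L1 reconstruction loss computed in the YCbCr color space instead of RGB."""
    return F.l1_loss(rgb_to_ycbcr(pred_rgb), rgb_to_ycbcr(target_rgb))
```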
Achieving highly accurate kinematic or simulator models that closely match the real robot can facilitate model-based control (e.g., model predictive control or linear quadratic regulators) and model-based trajectory planning (e.g., trajectory optimization), and can reduce the learning time needed by reinforcement learning methods. Thus, the objective of this work is to learn the residual error between a kinematic and/or simulator model and the real robot. This is achieved using auto-tuning and neural networks, where the parameters of a neural network are updated with an auto-tuning method that applies equations from an Unscented Kalman Filter (UKF) formulation. With this method, we model these residual errors using only a small amount of data, which is necessary when we learn to improve the simulator/kinematic model directly from hardware operation. We demonstrate our method on robot hardware (e.g., a manipulator arm) and show that, with the learned residual errors, we can further close the reality gap between kinematic models, simulations, and the real robot.
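The sketch below illustrates the residual-learning structure described above: a small network is trained to predict the error between the nominal kinematic model's output and the measured pose, and its prediction is added to the nominal model at query time. For brevity, the sketch fits the network with plain gradient descent; the paper instead updates the parameters with an auto-tuning method built on UKF equations, which is not reproduced here. All names and shapes are illustrative.

```python
import torch
import torch.nn as nn

class ResidualModel(nn.Module):
    """Predicts the pose error of a nominal kinematic model from the joint angles."""
    def __init__(self, n_joints: int, pose_dim: int = 6, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_joints, hidden), nn.Tanh(),
                                 nn.Linear(hidden, pose_dim))

    def forward(self, q: torch.Tensor) -> torch.Tensor:
        return self.net(q)

def corrected_forward_kinematics(kin_model, residual: ResidualModel, q: torch.Tensor):
    """Corrected prediction = nominal kinematics + learned residual."""
    return kin_model(q) + residual(q)

def fit_residual(kin_model, residual, q_data, pose_measured, epochs=200, lr=1e-3):
    """Fit on a small set of (joint angles, measured pose) pairs from hardware.

    Plain gradient descent stands in here for the UKF-based auto-tuning
    used in the paper.
    """
    opt = torch.optim.Adam(residual.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        error = (pose_measured - kin_model(q_data)).detach()  # residual targets
        loss = nn.functional.mse_loss(residual(q_data), error)
        loss.backward()
        opt.step()
    return residual
```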
Deep neural networks (DNNs) are well known to be vulnerable to adversarial examples (AEs). In addition, AEs have adversarial transferability, namely, AEs generated for a source model can fool other (target) models. In this paper, we investigate, for the first time, the transferability of models encrypted for adversarially robust defense. To objectively verify this transferability property, the robustness of the models is evaluated with a benchmark attack method called AutoAttack. In image classification experiments, the use of encrypted models is confirmed not only to be robust against AEs but also to reduce the influence of AEs in terms of the transferability of models.